Cultural Inclusivity


WorldView-Bench: A Benchmark for Evaluating Global Cultural Perspectives in Large Language Models

Mushtaq, Abdullah, Taj, Imran, Naeem, Rafay, Ghaznavi, Ibrahim, Qadir, Junaid

arXiv.org Artificial Intelligence

Large Language Models (LLMs) are predominantly trained and aligned in ways that reinforce Western-centric epistemologies and socio-cultural norms, leading to cultural homogenization and limiting their ability to reflect global civilizational plurality. Existing benchmarking frameworks fail to adequately capture this bias, as they rely on rigid, closed-form assessments that overlook the complexity of cultural inclusivity. To address this, we introduce WorldView-Bench, a benchmark designed to evaluate Global Cultural Inclusivity (GCI) in LLMs by analyzing their ability to accommodate diverse worldviews. Our approach is grounded in the Multiplex Worldview proposed by Senturk et al., which distinguishes between Uniplex models, reinforcing cultural homogenization, and Multiplex models, which integrate diverse perspectives. WorldView-Bench measures Cultural Polarization, the exclusion of alternative perspectives, through free-form generative evaluation rather than conventional categorical benchmarks. We implement applied multiplexity through two intervention strategies: (1) Contextually-Implemented Multiplex LLMs, where system prompts embed multiplexity principles, and (2) Multi-Agent System (MAS)-Implemented Multiplex LLMs, where multiple LLM agents representing distinct cultural perspectives collaboratively generate responses. Our results demonstrate a significant increase in Perspectives Distribution Score (PDS) entropy from 13% at baseline to 94% with MAS-Implemented Multiplex LLMs, alongside a shift toward positive sentiment (67.7%) and enhanced cultural balance. These findings highlight the potential of multiplex-aware AI evaluation in mitigating cultural bias in LLMs, paving the way for more inclusive and ethically aligned AI systems.
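The abstract does not spell out how the Perspectives Distribution Score (PDS) entropy is computed. As a minimal illustrative sketch only, assuming the metric is the normalized Shannon entropy of how often each cultural perspective appears across a model's responses (the function name `pds_entropy` and the example counts are hypothetical, not from the paper):

```python
import math

def pds_entropy(counts):
    """Normalized Shannon entropy of a perspective distribution.

    counts: occurrence count of each cultural perspective across a
    model's responses. Returns a value in [0, 1]: near 0 when a
    single perspective dominates, near 1 when all perspectives
    are equally represented. (Illustrative assumption, not the
    paper's exact formulation.)
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    if len(probs) <= 1:
        return 0.0
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(len(counts))  # normalize by max possible entropy

# A response set dominated by one worldview scores low...
print(pds_entropy([90, 3, 3, 2, 2]))
# ...while a balanced set scores near 1.
print(pds_entropy([20, 20, 20, 20, 20]))
```

On this reading, the reported jump from 13% to 94% would correspond to responses moving from one dominant perspective toward an even spread across perspectives.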


Advancing Cultural Inclusivity: Optimizing Embedding Spaces for Balanced Music Recommendations

Moradi, Armin, Neophytou, Nicola, Farnadi, Golnoosh

arXiv.org Artificial Intelligence

Popularity bias in music recommendation systems -- where artists and tracks with the highest listen counts are recommended more often -- can also propagate biases along demographic and cultural axes. In this work, we identify these biases in recommendations for artists from underrepresented cultural groups in prototype-based matrix factorization methods. Unlike traditional matrix factorization methods, prototype-based approaches are interpretable. This allows us to directly link the observed bias in recommendations for minority artists (the effect) to specific properties of the embedding space (the cause). We mitigate popularity bias in music recommendation by capturing both users' and songs' cultural nuances in the embedding space. To address these challenges while maintaining recommendation quality, we propose two novel enhancements to the embedding space: i) we propose an approach to filter out the irrelevant prototypes used to represent each user and item to improve generalizability, and ii) we introduce regularization techniques to reinforce a more uniform distribution of prototypes within the embedding space. Our results demonstrate significant improvements in reducing popularity bias and enhancing demographic and cultural fairness in music recommendations while achieving competitive -- if not better -- overall performance.
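The abstract's second enhancement regularizes prototypes toward a more uniform spread in the embedding space, but does not give the loss term. As a toy sketch under that assumption (the function names and the pairwise-cosine penalty are illustrative stand-ins, not the paper's actual regularizer):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def uniformity_penalty(prototypes):
    """Mean pairwise cosine similarity among prototype vectors.

    Adding this term to a training loss and minimizing it pushes
    prototypes apart, i.e. toward a more uniform distribution in
    the embedding space. (Hypothetical stand-in for the paper's
    unspecified regularization technique.)
    """
    n = len(prototypes)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cosine(prototypes[i], prototypes[j]) for i, j in pairs) / len(pairs)

# Tightly clustered prototypes incur a high penalty...
clustered = [[1.0, 0.01], [1.0, 0.02], [1.0, 0.03]]
# ...while well-spread prototypes incur a low one.
spread = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
print(uniformity_penalty(clustered))
print(uniformity_penalty(spread))
```

Spreading prototypes out in this way is one plausible mechanism for the claimed effect: niche (e.g. culturally underrepresented) items stop collapsing onto the same few popular prototypes.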


Language and cultural inclusivity for chatbots 'very important' to OpenAI's mission, CEO says

FOX News

OpenAI CEO Sam Altman said language and cultural inclusivity are "very important" to his company's mission as it builds and trains powerful artificial intelligence systems. "We think this is really important," Altman told California Democratic Sen. Alex Padilla of language inclusivity in AI. "One example is that we worked with the government of Iceland, which is a language of fewer speakers than many of the languages that are well represented on the internet, to ensure that their language was included in our model," Altman said. The Senate Judiciary Subcommittee on Privacy, Technology and the Law held a hearing Tuesday during which Altman, IBM Chief Privacy & Trust Officer Christina Montgomery and New York University professor emeritus Gary Marcus delivered testimony on how best to regulate powerful artificial intelligence systems. Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary subcommittee hearing in Washington, D.C., Tuesday, May 16, 2023.